How Rules Represent Causal Knowledge: Causal Modeling with Abductive Logic Programs

Rückschloß, Kilian, Weitkämper, Felix

arXiv.org Artificial Intelligence

Pearl observes that causal knowledge enables predicting the effects of interventions, such as actions, whereas descriptive knowledge only permits drawing conclusions from observation. This paper extends Pearl's approach to causality and interventions to the setting of stratified abductive logic programs. It shows how stable models of such programs can be given a causal interpretation by building on philosophical foundations and recent work by Bochman and Eelink et al. In particular, it provides a translation of abductive logic programs into causal systems, thereby clarifying the informal causal reading of logic program rules and supporting principled reasoning about external actions. The main result establishes that the stable model semantics for stratified programs conforms to key philosophical principles of causation, such as causal sufficiency, natural necessity, and irrelevance of unobserved effects. This justifies the use of stratified abductive logic programs as a framework for causal modeling and for predicting the effects of interventions.
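The causal reading of rules described above can be made concrete with a toy sketch. The following is a minimal illustration, not the paper's formalism: a propositional definite program evaluated by forward chaining, with a Pearl-style intervention that deletes every rule defining the intervened atom and clamps its truth value instead. All names here are illustrative.

```python
def consequences(rules, facts):
    """Forward-chain (head, body) rules from a set of atomic facts."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for head, body in rules:
            if head not in derived and all(b in derived for b in body):
                derived.add(head)
                changed = True
    return derived

def do(rules, facts, atom, value):
    """Pearl-style intervention: drop every rule with `atom` in the head,
    then fix the atom's value directly."""
    surviving = [(h, b) for h, b in rules if h != atom]
    clamped = (set(facts) - {atom}) | ({atom} if value else set())
    return consequences(surviving, clamped)

# Rain and a sprinkler are both causes of wet grass.
rules = [("wet", ["rain"]), ("wet", ["sprinkler"])]
observed = consequences(rules, {"rain"})        # rain derives wet
forced_dry = do(rules, {"rain"}, "wet", False)  # wet clamped off; rain remains
```

This illustrates why interventions differ from observations: forcing `wet` off severs its causal dependence on `rain`, whereas merely observing dry grass would license inferences back to its causes.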


ONSEP: A Novel Online Neural-Symbolic Framework for Event Prediction Based on Large Language Model

Yu, Xuanqing, Sun, Wangtao, Li, Jingwei, Liu, Kang, Liu, Chengbao, Tan, Jie

arXiv.org Artificial Intelligence

In the realm of event prediction, temporal knowledge graph forecasting (TKGF) stands as a pivotal technique. Previous approaches face the challenges of not utilizing experience during testing and relying on a single short-term history, which limits adaptation to evolving data. In this paper, we introduce the Online Neural-Symbolic Event Prediction (ONSEP) framework, which innovates by integrating dynamic causal rule mining (DCRM) and dual history augmented generation (DHAG). DCRM dynamically constructs causal rules from real-time data, allowing for swift adaptation to new causal relationships. In parallel, DHAG merges short-term and long-term historical contexts, leveraging a bi-branch approach to enrich event prediction. Our framework demonstrates notable performance enhancements across diverse datasets, with significant Hit@k (k=1,3,10) improvements, showcasing its ability to augment large language models (LLMs) for event prediction without necessitating extensive retraining. The ONSEP framework not only advances the field of TKGF but also underscores the potential of neural-symbolic approaches in adapting to dynamic data environments.


CURLS: Causal Rule Learning for Subgroups with Significant Treatment Effect

Zhou, Jiehui, Yang, Linxiao, Liu, Xingyu, Gu, Xinyue, Sun, Liang, Chen, Wei

arXiv.org Artificial Intelligence

In causal inference, estimating heterogeneous treatment effects (HTE) is critical for identifying how different subgroups respond to interventions, with broad applications in fields such as precision medicine and personalized advertising. Although HTE estimation methods aim to improve accuracy, how to provide explicit subgroup descriptions remains unclear, hindering data interpretation and strategic intervention management. In this paper, we propose CURLS, a novel rule learning method leveraging HTE, which can effectively describe subgroups with significant treatment effects. Specifically, we frame causal rule learning as a discrete optimization problem, balancing the treatment effect against its variance while accounting for rule interpretability. We design an iterative procedure based on the minorize-maximization algorithm and solve a submodular lower bound as an approximation of the original problem. Quantitative experiments and qualitative case studies verify that compared with state-of-the-art methods, CURLS can find subgroups where the estimated and true effects are 16.1% and 13.8% higher and the variance is 12.0% smaller, while maintaining similar or better estimation accuracy and rule interpretability. Code is available at https://osf.io/zwp2k/.
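The core quantity such a rule learner optimizes can be sketched in a few lines. This is a hypothetical illustration, not CURLS's exact objective: for a candidate rule (a predicate over covariates), score the covered subgroup by its treated/control outcome difference minus a variance penalty. The penalty weight `lam` and all data fields are assumptions for the example.

```python
from statistics import mean, variance

def rule_score(data, rule, lam=1.0):
    """Treatment effect minus lam * variance, over units covered by `rule`."""
    covered = [d for d in data if rule(d["x"])]
    treated = [d["y"] for d in covered if d["t"] == 1]
    control = [d["y"] for d in covered if d["t"] == 0]
    if len(treated) < 2 or len(control) < 2:
        return float("-inf")  # rule too narrow to score reliably
    effect = mean(treated) - mean(control)
    return effect - lam * (variance(treated) + variance(control))

data = [
    {"x": {"age": 70}, "t": 1, "y": 3.0},
    {"x": {"age": 72}, "t": 1, "y": 3.2},
    {"x": {"age": 68}, "t": 0, "y": 1.0},
    {"x": {"age": 75}, "t": 0, "y": 1.2},
    {"x": {"age": 30}, "t": 1, "y": 1.1},  # not covered by the rule below
]
elderly = lambda x: x["age"] >= 65
score = rule_score(data, elderly)  # ~1.96: effect 2.0 minus variance penalty
```

A discrete optimizer would search over conjunctions of covariate conditions in place of the hand-written `elderly` predicate, trading off this score against rule length for interpretability.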


CLMASP: Coupling Large Language Models with Answer Set Programming for Robotic Task Planning

Lin, Xinrui, Wu, Yangfan, Yang, Huanyu, Zhang, Yu, Zhang, Yanyong, Ji, Jianmin

arXiv.org Artificial Intelligence

Large Language Models (LLMs) possess extensive foundational knowledge and moderate reasoning abilities, making them suitable for general task planning in open-world scenarios. However, it is challenging to ground an LLM-generated plan so that it is executable on a specific robot with its particular restrictions. This paper introduces CLMASP, an approach that couples LLMs with Answer Set Programming (ASP) to overcome these limitations, where ASP is a non-monotonic logic programming formalism renowned for its capacity to represent and reason about a robot's action knowledge. CLMASP begins with an LLM generating a basic skeleton plan, which is subsequently tailored to the specific scenario using a vector database. This plan is then refined by an ASP program encoding the robot's action knowledge, which integrates implementation details into the skeleton, grounding the LLM's abstract outputs in practical robot contexts. Our experiments conducted on the VirtualHome platform demonstrate CLMASP's efficacy. Compared to the baseline executable rate of under 2% with LLM approaches, CLMASP significantly improves this to over 90%.


Ho

AAAI Conferences

A typical AI system engages many levels of cognitive processing, from learning to problem solving. The issue we would like to address in this paper is: Can a unified representational scheme be used in learning processes as well as the various levels of cognitive processing, from concept representation to problem solving, including the generation of action plans? In a previous paper we defined a set of representations called "atomic operational representations" that employs an explicit representation of the temporal dimension and that can be used to ground concepts in the physical world, such as concepts that involve various activities and interactions. In this paper we apply operational representations in a unified manner to the following cognitive processes: 1) the unsupervised learning and encoding of causal rules of actions and their consequences; and 2) the application of the learned causal rules to problem solving processes that produce desired action plans. The unique and explicit temporal characteristic of operational representations is the key feature that allows the encoded concepts to be used in a unified manner across the various levels of cognitive processing. Hence, abstractions in the form of operational representations have an important role to play in AI.


Using Human-Guided Causal Knowledge for More Generalized Robot Task Planning

Tatlidil, Semir, Liu, Yanqi, Sheetz, Emily, Bahar, R. Iris, Sloman, Steven

arXiv.org Artificial Intelligence

A major challenge in research involving artificial intelligence (AI) is the development of algorithms that can find solutions to problems that generalize to different environments and tasks. Unlike AI, humans are adept at finding solutions that transfer. We hypothesize this is because their solutions are informed by causal models. We propose to use human-guided causal knowledge to help robots find solutions that can generalize to a new environment. We develop and test the feasibility of a language interface that naïve participants can use to communicate these causal models to a planner. We find preliminary evidence that participants are able to use our interface and generate causal models that achieve near-generalization. We outline an experiment aimed at testing far-generalization using our interface and describe our longer-term goals for these causal models.


Causal Rule Ensemble: Interpretable Inference of Heterogeneous Treatment Effects

Lee, Kwonsang, Bargagli-Stoffi, Falco J., Dominici, Francesca

arXiv.org Machine Learning

In environmental epidemiology, it is critically important to identify subpopulations that are most vulnerable to the adverse effects of air pollution so we can develop targeted interventions. In recent years, there have been many methodological developments for addressing heterogeneity of treatment effects in causal inference. A common approach is to estimate the conditional average treatment effect (CATE) for a pre-specified covariate set. However, this approach does not provide an easy-to-interpret tool for identifying susceptible subpopulations or discovering new subpopulations that are not defined a priori by the researchers. In this paper, we propose a new causal rule ensemble (CRE) method with two features simultaneously: 1) ensuring interpretability by revealing heterogeneous treatment effect structures in terms of decision rules and 2) providing CATE estimates with high statistical precision similar to causal machine learning algorithms. We provide theoretical results that guarantee consistency of the estimated causal effects for the newly discovered causal rules. Furthermore, via simulations, we show that the CRE method has competitive performance in its ability to discover subpopulations and then accurately estimate the causal effects. We also develop a new sensitivity analysis method that examines robustness to unmeasured confounding bias. Lastly, we apply the CRE method to the study of the effects of long-term exposure to air pollution on the 5-year mortality rate of the New England Medicare-enrolled population in the United States. Code is available at https://github.com/kwonsang/causal_rule_ensemble.


Discovering Reliable Causal Rules

Budhathoki, Kailash, Boley, Mario, Vreeken, Jilles

arXiv.org Artificial Intelligence

We study the problem of deriving policies, or rules, that when enacted on a complex system, cause a desired outcome. Absent the ability to perform controlled experiments, such rules have to be inferred from past observations of the system's behaviour. This is a challenging problem for two reasons: First, observational effects are often unrepresentative of the underlying causal effect because they are skewed by the presence of confounding factors. Second, naive empirical estimations of a rule's effect have a high variance, and, hence, their maximisation can lead to random results. To address these issues, first we measure the causal effect of a rule from observational data, adjusting for the effect of potential confounders. Importantly, we provide a graphical criterion under which causal rule discovery is possible. Moreover, to discover reliable causal rules from a sample, we propose a conservative and consistent estimator of the causal effect, and derive an efficient and exact algorithm that maximises the estimator. On synthetic data, the proposed estimator converges faster to the ground truth than the naive estimator and recovers relevant causal rules even at small sample sizes. Extensive experiments on a variety of real-world datasets show that the proposed algorithm is efficient and discovers meaningful rules.
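The confounder adjustment step described above can be illustrated with a small sketch. This is not the paper's estimator, only the textbook back-door formula it builds on, for a binary rule R, outcome Y, and one observed confounder Z: P(y | do(R)) = Σ_z P(y | R, z) P(z). All variable names are assumptions for the example.

```python
from collections import Counter

def adjusted_effect(samples):
    """samples: list of (r, z, y) with r, y boolean and z discrete.
    Returns the back-door-adjusted P(y=1 | do(r=1)) - P(y=1 | do(r=0))."""
    n = len(samples)
    pz = Counter(z for _, z, _ in samples)  # marginal distribution of Z
    def p_y_do(r):
        total = 0.0
        for z, nz in pz.items():
            match = [y for rr, zz, y in samples if rr == r and zz == z]
            if match:  # stratum-wise P(y=1 | r, z), weighted by P(z)
                total += (sum(match) / len(match)) * (nz / n)
        return total
    return p_y_do(True) - p_y_do(False)

# Toy data: the rule fully determines the outcome in both strata of Z.
samples = [(True, 0, 1), (True, 1, 1), (False, 0, 0), (False, 1, 0)]
effect = adjusted_effect(samples)  # 1.0 here
```

The naive estimator would instead compare P(y | r=1) to P(y | r=0) directly; when Z influences both the rule and the outcome, the two quantities diverge, which is exactly the skew the abstract warns about.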


Evaluating the Apperception Engine

Evans, Richard, Hernandez-Orallo, Jose, Welbl, Johannes, Kohli, Pushmeet, Sergot, Marek

arXiv.org Artificial Intelligence

The Apperception Engine is an unsupervised learning system. Given a sequence of sensory inputs, it constructs a symbolic causal theory that both explains the sensory sequence and also satisfies a set of unity conditions. The unity conditions insist that the constituents of the theory - objects, properties, and laws - must be integrated into a coherent whole. Once a theory has been constructed, it can be applied to predict future sensor readings, retrodict earlier readings, or impute missing readings. In this paper, we evaluate the Apperception Engine across a diverse range of domains, including cellular automata, rhythms and simple nursery tunes, multi-modal binding problems, occlusion tasks, and sequence induction intelligence tests. In each domain, we test our engine's ability to predict future sensor values, retrodict earlier sensor values, and impute missing sensory data. The engine performs well in all these domains, significantly outperforming neural net baselines and state-of-the-art inductive logic programming systems. These results are significant because neural nets typically struggle to solve the binding problem (where information from different modalities must somehow be combined together into different aspects of one unified object) and fail to solve occlusion tasks (in which objects are sometimes visible and sometimes obscured from view). We note in particular that in the sequence induction intelligence tests, our system achieved human-level performance. This is notable because our system is not a bespoke system designed specifically to solve intelligence tests, but a general-purpose system that was designed to make sense of any sensory sequence.


Learning Post-Hoc Causal Explanations for Recommendation

Xu, Shuyuan, Li, Yunqi, Liu, Shuchang, Fu, Zuohui, Zhang, Yongfeng

arXiv.org Artificial Intelligence

State-of-the-art recommender systems have the ability to generate high-quality recommendations, but usually cannot provide intuitive explanations to humans due to the usage of black-box prediction models. The lack of transparency has highlighted the critical importance of improving the explainability of recommender systems. In this paper, we propose to extract causal rules from the user interaction history as post-hoc explanations for black-box sequential recommendation mechanisms, while maintaining the predictive accuracy of the recommendation model. Our approach first obtains counterfactual examples with the aid of a perturbation model, and then extracts personalized causal relationships for the recommendation model through a causal rule mining algorithm. Experiments are conducted on several state-of-the-art sequential recommendation models and real-world datasets to verify the performance of our model in generating causal explanations. We also evaluate the discovered causal explanations in terms of quality and fidelity, which shows that compared with conventional association rules, causal rules can provide personalized and more effective explanations for the behavior of black-box recommendation models.
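The perturb-and-compare idea behind such post-hoc explanations can be sketched as follows. This is a hypothetical illustration, not the paper's perturbation model or mining algorithm: remove one history item at a time, re-query a black-box recommender, and keep the items whose removal flips the recommendation as candidate causes. The toy recommender and item names are invented for the example.

```python
def toy_recommender(history):
    """Stand-in black box: recommend the last action-movie if any, else the last item."""
    action = [h for h in history if h.startswith("action:")]
    return action[-1] if action else history[-1]

def causal_items(history, recommend):
    """Return the base recommendation and the history items whose removal changes it."""
    base = recommend(history)
    causes = []
    for i, item in enumerate(history):
        perturbed = history[:i] + history[i + 1:]
        if perturbed and recommend(perturbed) != base:
            causes.append(item)
    return base, causes

history = ["drama:a", "action:b", "drama:c"]
base, causes = causal_items(history, toy_recommender)
# only removing "action:b" flips the recommendation, so it is the candidate cause
```

Association-rule mining over the same history would surface frequently co-occurring items regardless of whether they influenced the output; the counterfactual filter above is what makes the extracted rules causal with respect to the black box.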